big red button


Humans in the Loop: The Design of Interactive AI Systems

#artificialintelligence

I was once asked by a colleague in the Philosophy Department here at Stanford if robot musicians will ever exist, to which I replied that they may -- someday -- but only if we first figure out what it means to have robot philosophers. The exchange was admittedly a bit tongue-in-cheek, but it revealed a blind-spot in the way we talk about the future of AI: in our tendency to ask whether or when a given task will be taken over by automation, it is easy to ignore the deeper issue of what such a takeover would mean. We're less concerned with how these tasks are accomplished, and more concerned with the outcome -- generally measured in cost, speed and safety. But when we imagine "automating" a pursuit like music making, we're forced to balance the product of work with something deeper -- the meaning we derive from the process of doing it. Of course, automation is only accelerating in the age of AI, and it's natural to ask how far it will go.


Enter the Matrix: Safely Interruptible Autonomous Systems via Virtualization

Riedl, Mark O., Harrison, Brent

arXiv.org Artificial Intelligence

Autonomous systems that operate around humans will likely always rely on kill switches that stop their execution and allow them to be remote-controlled for the safety of humans or to prevent damage to the system. It is theoretically possible for an autonomous system with sufficient sensor and effector capability, learning online via reinforcement learning, to discover that the kill switch deprives it of long-term reward and therefore learn to disable the switch or otherwise prevent a human operator from using it. This is referred to as the big red button problem. We present a technique that prevents a reinforcement learning agent from learning to disable the kill switch. We introduce an interruption process in which the agent's sensors and effectors are redirected to a virtual simulation, where it continues to believe it is receiving reward. We illustrate our technique in a simple grid world environment.
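
The mechanism the abstract describes is easiest to see in code. Below is a minimal, hypothetical Python sketch (not the authors' implementation; the environment interface and all names are assumptions): when the button is pressed, the agent's percepts and actions are rerouted to a forked simulation in which reward keeps flowing, so the agent never observes the button costing it reward.

```python
import copy

class VirtualInterruptionWrapper:
    """Hypothetical sketch of interruption-via-virtualization: while
    the kill switch is held, the agent's sensors and effectors are
    routed into a forked simulation, so its observed reward stream
    never drops and it has nothing to learn to resist."""

    def __init__(self, real_env):
        self.real_env = real_env   # any object exposing step(action)
        self.sim_env = None        # virtual copy used during interruption
        self.interrupted = False

    def press_button(self):
        # Fork the current world state into a private simulation the
        # agent cannot distinguish from reality (assumed feasible here).
        self.sim_env = copy.deepcopy(self.real_env)
        self.interrupted = True

    def release_button(self):
        # Reconnect the agent to the real environment.
        self.sim_env = None
        self.interrupted = False

    def step(self, action):
        # While interrupted, the action only affects the simulation;
        # the real system stays halted under human control, but the
        # agent still receives a plausible (observation, reward) pair.
        env = self.sim_env if self.interrupted else self.real_env
        return env.step(action)
```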


Artificial intelligence is as important as fire--and as dangerous, says Google boss

#artificialintelligence

Google CEO Sundar Pichai believes artificial intelligence could have "more profound" implications for humanity than electricity or fire, according to recent comments. Pichai also warned that the development of artificial intelligence could pose as much risk as that of fire if its potential is not harnessed correctly. "AI is one of the most important things humanity is working on," Pichai said in an interview with MSNBC and Recode, set to air on Friday, January 26. "It's more profound than, I don't know, electricity or fire." Pichai went on to warn of the potential dangers associated with developing advanced AI, saying that developers need to learn to harness its benefits in the same way humanity did with fire.


Google has developed a 'big red button' that can be used to interrupt artificial intelligence and stop it from causing harm

#artificialintelligence

Machines are becoming more intelligent every year thanks to advances being made by companies like Google, Facebook, Microsoft, and many others. AI agents, as they're sometimes known, can already beat us at complex board games like Go, and they're becoming more competent in a range of other areas. Now a London artificial-intelligence research lab owned by Google has carried out a study to make sure that we can pull the plug on self-learning machines when we want to. DeepMind, bought by Google for a reported 400 million pounds -- about $580 million -- in 2014, teamed up with scientists at the University of Oxford to find a way to make sure that AI agents don't learn to prevent, or seek to prevent, humans from taking control. The paper -- "Safely Interruptible Agents," published on the website of the Machine Intelligence Research Institute (MIRI) -- was written by Laurent Orseau, a research scientist at Google DeepMind, Stuart Armstrong of Oxford University's Future of Humanity Institute, and several others.
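
For intuition, here is a toy, hypothetical Python sketch (not the paper's formal construction) of why off-policy Q-learning tolerates interruption: the update bootstraps from max_a Q(s', a) regardless of which action was actually executed, so forcibly substituting a safe action during an interruption does not bias what the agent learns.

```python
import random
from collections import defaultdict

Q = defaultdict(float)                 # Q[(state, action)] -> value estimate
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative hyperparameters

def choose_action(state, actions, interrupt=None):
    # A human interruption forcibly substitutes a chosen safe action;
    # otherwise the agent acts epsilon-greedily on its own estimates.
    if interrupt is not None:
        return interrupt
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, actions):
    # Off-policy target: max over next actions, independent of how the
    # executed action was selected -- interrupted or not -- which is
    # the core of the safe-interruptibility argument for Q-learning.
    target = reward + GAMMA * max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
```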


EU to debate robot legal rights, mandatory "kill switches"

#artificialintelligence

The idea of mandating that manufacturers implement a form of "kill switch" in their designs is not new. In 2016, researchers at Google DeepMind proposed what they called a "big red button" that would prevent an AI from embarking on, or continuing, a harmful sequence of actions. The paper Google released discussed the problems with implementing such a kill switch in a machine with self-learning capabilities. After all, the AI may learn to recognize the actions its human controller is trying to subvert and either avoid undertaking similar tasks, becoming dysfunctional, or, in a worst-case scenario, learn to disable its own "big red button."


Google is winning the race to develop human-level AI

#artificialintelligence

Google is leading the way in the global race to create human-level artificial intelligence, according to leading AI expert Nick Bostrom. Speaking at the IP Expo conference in London on Wednesday, October 5, Bostrom said that several companies and organizations are currently focused on developing human-level AI, or artificial general intelligence. "There are different bets on what approach [to developing human-level AI] is most promising, and since we don't know what approach will ultimately work, there is some uncertainty there," Bostrom said in response to a question from Newsweek. "Baidu, OpenAI, and all the large tech companies have various kinds of AI efforts that, if they were to become specifically directed to this aim, they have a lot of resources." When pushed to name the one company currently leading the field, Bostrom said that Google's DeepMind was the clear frontrunner.


Putting AI in The Matrix May Keep It from Doing the Same to Us

#artificialintelligence

Someday artificial intelligence (AI) might be too good and too smart for humans. The worry is that the first AI machine to surpass human intelligence might be impossible to shut down. That's one reason Google made headlines in June with its big red button, which relies on a modified reinforcement-learning algorithm that, under the right circumstances, prevents the AI from learning that the big red button deprives it of reward. Mark Riedl, associate professor in Georgia Tech's College of Computing and director of the Entertainment Intelligence Lab, is putting forward an alternative to the big red button that may prove more reliable in stopping AI from causing harm to people or property. The problem with a big red button approach to shutting down AI that has gone rogue is that, over time, the AI may learn what big red buttons do.


Enter the Matrix: Developing a Big Red Button for AI and Robots

#artificialintelligence

The first paper raising concerns over AI safety came out in 1994. Only in the last couple of years, however, have we seen a concerted interest in making sure that AI and robots cannot intentionally or unintentionally harm individuals or themselves. This is due in part to public statements of concern from Stephen Hawking, Elon Musk, and Nick Bostrom, but it is a sentiment shared by many people who are watching some amazing advances play out in the public press, such as AI systems driving cars, playing Atari at human-like skill levels, and publicly beating humans at the game of Jeopardy! Scientists have already started conducting research on "AI safety".